How China is using AI news anchors to deliver its propaganda

Screengrab from a video by the Chinese state-backed group Storm-1376, showing an AI-generated newsreader. Photograph: Storm-1376

The news presenter has a deeply uncanny air as he delivers a partisan and pejorative message in Mandarin: Taiwan’s outgoing president, Tsai Ing-wen, is as effective as limp spinach, her period in office beset by economic underperformance, social problems and protests.

“Water spinach looks at water spinach. Turns out that water spinach isn’t just a name,” says the presenter, in an extended metaphor about Tsai being “Hollow Tsai” – a pun related to the Mandarin word for water spinach.

This is not a conventional broadcast journalist, even if the lack of impartiality is no longer a shock. The anchor is generated by an artificial intelligence program, and the segment is trying, albeit clumsily, to influence the Taiwanese presidential election.

The source and creator of the video are unknown, but the clip is designed to make voters doubt politicians who want Taiwan to remain at arm’s length from China, which claims that the self-governing island is part of its territory. It is the latest example of a sub-genre of the AI-generated disinformation game: the deepfake news anchor or TV presenter.

Such avatars are proliferating on social networks, spreading state-backed propaganda. Experts say this kind of video will continue to spread as the technology becomes more widely accessible.

“It does not need to be perfect,” said Tyler Williams, the director of investigations at Graphika, a disinformation research company. “If a user is just scrolling through X or TikTok, they are not picking up little nuances on a smaller screen.”

Beijing has already experimented with AI-generated news anchors. In 2018, the state news agency Xinhua unveiled Qiu Hao, a digital news presenter who promised to bring viewers the news “24 hours a day, 365 days a year”. Although the Chinese public is relatively accustomed to digital avatars in the media, Qiu Hao failed to catch on more widely.

China is at the forefront of the disinformation element of the trend. Last year, pro-China bot accounts on Facebook and X distributed AI-generated deepfake videos of news anchors representing a fictitious broadcaster called Wolf News. In one clip, the US government was accused of failing to deal with gun violence, while another highlighted China’s role at an international summit.

In a report released in April, Microsoft said Chinese state-backed cyber groups had sought to influence the Taiwanese presidential election with AI-generated content, including the use of fake news anchors or TV-style presenters. In one clip cited by Microsoft, the AI-generated anchor made unsubstantiated claims about the private life of the ultimately successful pro-sovereignty candidate – Lai Ching-te – alleging he had fathered children outside marriage.

Microsoft said the news anchors were created by the CapCut video editing tool, developed by the Chinese company ByteDance, which owns TikTok.

Clint Watts, the general manager of Microsoft’s threat analysis centre, points to China’s official use of synthetic news anchors in its domestic media market, which has also allowed the country to hone the format. It has now become a tool for disinformation, although there has been little discernible impact so far.

“The Chinese are much more focused on trying to put AI into their systems – propaganda, disinformation – they moved there very quickly. They’re trying everything. It’s not particularly effective,” said Watts.

Third-party vendors such as CapCut offer the news anchor format as a template, so it is easy to adapt and produce in large volume.

There are also clips featuring avatars that act like a cross between a professional TV presenter and an influencer speaking directly to the camera. One video produced by the Chinese state-backed group Storm-1376 – also known as Spamouflage – features an AI-generated blond female presenter alleging that the US and India are secretly selling weapons to the Myanmar military.

The overall effect is far from convincing. Despite a realistic-looking presenter, the video is undermined by a stiff voice that is clearly computer-generated. Other examples unearthed by NewsGuard, an organisation that monitors misinformation and disinformation, show a Spamouflage-linked TikTok account using AI-generated avatars to comment on US news stories such as food costs. One video shows an avatar with a computer-generated voice under the slogan: “Is Walmart lying to you about the weight of their meat?”

NewsGuard said the avatar videos were part of a pro-China network that was “widening” before the US presidential election. It noted 167 accounts created since last year that were linked to Spamouflage.

Other nations have experimented with deepfake anchors. Iranian state-backed hackers recently interrupted streaming TV services in the United Arab Emirates to broadcast a deepfake newsreader delivering a report on the war in Gaza. On Friday it was reported that the Islamic State terrorist group is using AI-generated news anchors – in helmet and fatigues – to broadcast propaganda.

And one European state is openly trying AI-generated presenters: Ukraine’s ministry of foreign affairs has unveiled an AI-generated spokesperson, Victoria Shi, using the likeness of Rosalie Nombre – a Ukrainian singer and media personality who gave permission for her image to be used. The result is, at the very least, impressive.

Last year, China introduced guidelines for tagging content, stating that images or videos generated using AI should be clearly watermarked. But Jeffrey Ding, an assistant professor at George Washington University who focuses on technology, said it was an “open question” how the tagging requirements would be enforced in practice, especially with regard to state propaganda.

And while China’s guidelines ostensibly cover AI-generated content of all kinds, the priority for Chinese regulators is “controlling information flows and making sure that the content being produced is not politically sensitive and does not cause societal disruption,” said Ding. That means that when it comes to fake news, “for the Chinese government, what counts as disinformation on the Taiwan front might be very different from what the proper or truer interpretation of disinformation is”.

Experts don’t believe the computer-made news anchors are effective dupes just yet: Tsai’s pro-sovereignty party won in Taiwan, despite the avatar’s best efforts. Macrina Wang, the deputy news verification editor at NewsGuard, said the avatar content she had seen was “quite crude” but was increasing in volume. To the trained eye these videos were obviously fake, she said, with stilted movement and a lack of shifting light or shadows on the avatar’s figure being among the giveaways. Nonetheless, some of the comments under the TikTok videos show that people have fallen for it.

“There is a risk that the average person thinks this [avatar] is a real person,” she said, adding that AI was making video content “more compelling, clickable and viral”.

Microsoft’s Watts said a more likely evolution of the newscaster tactic was footage of a real-life news anchor being manipulated rather than a fully AI-generated figure. We could see “any mainstream news media anchor being manipulated in a way to make them say something they didn’t say”, Watts said. That is “far more likely” than a fully synthetic effort.

In its report last month, Microsoft said its researchers had not encountered many examples of AI-generated content having an impact on the offline world.

“Rarely have nation-states’ employments of generative AI-enabled content achieved much reach across social media, and in only a few cases have we seen any genuine audience deception from such content,” the report read.

Instead, audiences are gravitating towards simple forgeries, such as fake text stories emblazoned with spoofed media logos.

Watts said there was a chance that a fully AI-generated video could affect an election, but the tool to create such a clip did not exist yet. “My guess is the tool that is used with that video … isn’t even on the market yet.” The most effective AI video messenger may not be a newscaster yet, but the tactic underlines the importance of video to states trying to sow confusion among voters.

Threat actors will also be waiting for an example of an AI-made video that grabs an audience’s attention – and will then replicate it. Both OpenAI and Google have demonstrated AI video generation tools in recent months, though neither has released them to the public.

“The effective use of synthetic personas in videos that people actually watch will happen in a commercial space first. And then you’ll see the threat actors move to that,” Watts said.

Additional research by Chi Hui Lin